Learning Affordance Segmentation for Real-World Robotic Manipulation via Synthetic Images
Authors
Abstract
Similar papers
Robotic framework for affordance learning
The report summarises the research progress since the last thesis group meeting held in April this year. As I have mentioned in my previous reports [5, 6], I intend to build a robotic framework that will allow a robot to incrementally learn object affordances through active observation and interaction with the “toy-world” [5, 7]. The toy-world is an appropriately constrained environment [5], suc...
Transfer learning from synthetic to real images using variational autoencoders for robotic applications
Robotic learning in simulation environments provides a faster, more scalable, and safer training methodology than learning directly with physical robots. Also, synthesizing images in a simulation environment for collecting large-scale image data is easy, whereas capturing camera images in the real world is time-consuming and expensive. However, learning from only synthetic images may not achiev...
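The excerpt ends before the method is described, but the paper's title indicates a variational autoencoder used to bridge synthetic and real images. Below is a minimal, hypothetical PyTorch sketch of such a VAE, assuming 64x64 RGB inputs and a 32-dimensional latent space; the architecture and the idea of training on both synthetic and real images to obtain a shared latent space are illustrative assumptions, not the paper's actual design.

# Hypothetical sketch: a small image VAE that could be trained on both
# synthetic and real camera images so that both domains share one latent space.
import torch
import torch.nn as nn

class ImageVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    recon_loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Usage: one optimization step on a mixed batch of synthetic and real images.
model = ImageVAE()
images = torch.rand(8, 3, 64, 64)
recon, mu, logvar = model(images)
loss = vae_loss(recon, images, mu, logvar)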
Learning a visuomotor controller for real world robotic grasping using simulated depth images
We want to build robots that are useful in unstructured real-world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped, as well as kinematic noise and other errors in the robot. This paper proposes an approach to learning a c...
A Multi-scale CNN for Affordance Segmentation in RGB Images
Given a single RGB image, our goal is to label every pixel with an affordance type. By affordance, we mean an object’s capability to readily support a certain human action, without requiring precursor actions. We focus on segmenting the following five affordance types in indoor scenes: ‘walkable’, ‘sittable’, ‘lyable’, ‘reachable’, and ‘movable’. Our approach uses a deep architecture, consisting...
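The excerpt is cut off before the architecture details, but the stated task (labeling every pixel of an RGB image with one of five affordance classes using a multi-scale deep network) can be illustrated with a small sketch. The model below is a hypothetical two-scale fully convolutional network in PyTorch; the layer widths, the class ordering, and the name MultiScaleAffordanceNet are assumptions for illustration, not the paper's implementation.

# Hypothetical sketch: pixel-wise affordance segmentation with two scales,
# a downsampled context branch fused with a full-resolution detail branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

AFFORDANCES = ["walkable", "sittable", "lyable", "reachable", "movable"]

class MultiScaleAffordanceNet(nn.Module):
    def __init__(self, num_classes=len(AFFORDANCES)):
        super().__init__()
        # Coarse branch: strided convolutions capture scene-level context.
        self.coarse = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fine branch: full-resolution features preserve object boundaries.
        self.fine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution acts as a per-pixel classifier over fused features.
        self.classifier = nn.Conv2d(32 + 64, num_classes, 1)

    def forward(self, rgb):
        coarse = self.coarse(rgb)
        # Upsample coarse features back to input resolution before fusion.
        coarse = F.interpolate(coarse, size=rgb.shape[-2:], mode="bilinear",
                               align_corners=False)
        fused = torch.cat([self.fine(rgb), coarse], dim=1)
        return self.classifier(fused)  # (B, num_classes, H, W) logits

# Usage: per-pixel affordance labels for one RGB image.
logits = MultiScaleAffordanceNet()(torch.randn(1, 3, 240, 320))
labels = logits.argmax(dim=1)  # (1, 240, 320) map of affordance indices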
Segmentation via manipulation
The motivation for this paper is the observation that a static scene containing more than one object/part most of the time cannot be segmented by vision alone or, in general, by any non-contact sensing. The only exception is the case when the objects/parts are physically separated so that the non-contact sensor can measure this separation, or when one has a great deal of a priori knowledge abou...
Journal
Journal title: IEEE Robotics and Automation Letters
Year: 2019
ISSN: 2377-3766, 2377-3774
DOI: 10.1109/lra.2019.2894439